Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network, and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network irrespective of the combination topology. Simulation results support the theoretical findings.
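To make the two strategies concrete, the following is a minimal sketch of the update rules being compared, using LMS-type adaptation over a ring network. All specifics here (ring topology with uniform averaging weights, the step-size, the noise level, and the helper name `run`) are illustrative assumptions and not the paper's simulation setup; the sketch only contrasts the diffusion (adapt-then-combine) update, which combines the freshly adapted intermediate estimates, with the consensus update, which combines the previous iterates and adapts locally.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu, iters = 10, 2, 0.05, 2000      # nodes, model size, step-size, iterations
w0 = rng.standard_normal(M)              # unknown parameter vector to estimate

# Ring topology with self-loops; uniform combination weights (left-stochastic columns)
A = np.eye(N)
for k in range(N):
    A[k, (k + 1) % N] = A[k, (k - 1) % N] = 1.0
A /= A.sum(axis=0, keepdims=True)

def run(strategy):
    """Return an empirical steady-state network mean-square deviation (MSD)."""
    W = np.zeros((N, M))                 # one estimate per node (rows)
    msd = []
    for _ in range(iters):
        U = rng.standard_normal((N, M))              # regressors, one per node
        d = U @ w0 + 0.1 * rng.standard_normal(N)    # noisy scalar measurements
        grad = (d - np.sum(U * W, axis=1))[:, None] * U   # per-node LMS correction
        if strategy == "diffusion":
            # adapt-then-combine: adapt first, then combine neighbors' intermediates
            W = A.T @ (W + mu * grad)
        else:
            # consensus: combine neighbors' previous iterates, adapt on own iterate
            W = A.T @ W + mu * grad
        msd.append(np.mean(np.sum((W - w0) ** 2, axis=1)))
    return float(np.mean(msd[-200:]))    # average the tail as a steady-state proxy

msd_diffusion = run("diffusion")
msd_consensus = run("consensus")
```

The only structural difference is where the combination step acts: inside the parentheses (on the adapted intermediates) for diffusion, or on the stale iterates for consensus, which is the asymmetry the analysis traces the stability gap back to.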